Results 1-20 of 2,550
1.
Sci Rep ; 14(1): 7650, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38561346

ABSTRACT

This study presents an advanced metaheuristic approach termed the Enhanced Gorilla Troops Optimizer (EGTO), which builds upon the Marine Predators Algorithm (MPA) to enhance the search capabilities of the Gorilla Troops Optimizer (GTO). Like many other metaheuristic algorithms, the GTO has difficulty preserving convergence accuracy and stability, notably on intricate and variable optimization problems, especially when compared with more advanced optimization techniques. To address these challenges and improve performance, this paper proposes the EGTO, which integrates high- and low-velocity ratios inspired by the MPA. The EGTO effectively balances the exploration and exploitation phases and achieves strong results while using fewer parameters and operations. Evaluation on a diverse array of benchmark functions, comprising 23 established functions and ten complex ones from the CEC2019 benchmark, highlights its performance. Comparative analysis against established optimization techniques reveals EGTO's superiority: it consistently outperforms counterparts such as tuna swarm optimization, the grey wolf optimizer, the gradient-based optimizer, the artificial rabbits optimization algorithm, the pelican optimization algorithm, the Runge-Kutta optimization algorithm (RUN), and the original GTO across various test functions. Furthermore, EGTO's efficacy extends to seven challenging engineering design problems: three-bar truss design, compression spring design, pressure vessel design, cantilever beam design, welded beam design, speed reducer design, and gear train design. The results showcase EGTO's robust convergence rate, its adeptness in locating local and global optima, and its superiority over the alternative methodologies explored.
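
A rough illustration of the idea: the sketch below layers a high/low-velocity-ratio schedule onto a simple population-based search in Python. The phase threshold, step scales, and update rules are illustrative assumptions, not the published EGTO equations.

import numpy as np

def sphere(x):
    # Benchmark objective f(x) = sum(x_i^2); global minimum 0 at the origin.
    return np.sum(x ** 2)

def velocity_ratio_search(f, dim=10, pop=30, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-100, 100, (pop, dim))      # initial troop positions
    best = min(X, key=f).copy()
    for t in range(iters):
        frac = t / iters
        sigma = 1.0 - frac                      # shrinking step scale
        for i in range(pop):
            if frac < 0.5:
                # High-velocity (exploration) phase: wide random steps.
                cand = X[i] + sigma * rng.normal(0.0, 50.0, dim)
            else:
                # Low-velocity (exploitation) phase: drift toward the best.
                cand = X[i] + sigma * rng.random(dim) * (best - X[i])
            cand = np.clip(cand, -100, 100)
            if f(cand) < f(X[i]):               # greedy replacement
                X[i] = cand
                if f(cand) < f(best):
                    best = cand.copy()
    return best, f(best)

best_x, best_val = velocity_ratio_search(sphere)
print(f"best objective found: {best_val:.3e}")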


Subjects
Alaska Natives, Data Compression, Lagomorpha, Animals, Humans, Rabbits, Gorilla gorilla, Algorithms, Benchmarking
2.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610365

ABSTRACT

High-quality cardiopulmonary resuscitation (CPR) and training are important for successful revival after out-of-hospital cardiac arrest (OHCA). However, existing training faces challenges in quantifying each aspect of performance. This study aimed to explore the possibility of using a three-dimensional motion capture system to assess CPR operations accurately and effectively, particularly the previously unquantified arm postures, and to analyze the relationships among parameters to guide students in improving their performance. We used a motion capture system (Mars series, Nokov, China) to collect compression data over five cycles, recording the dynamic position of each marker point in three-dimensional space over time and calculating compression depth and arm angles. Most parameters deviated unstably from the standard to some extent, especially for the untrained students. The five data sets for each parameter per individual all revealed statistically significant differences (p < 0.05). The correlation between Angle 1' and Angle 2' differed between trained (rs = 0.203, p < 0.05) and untrained students (rs = -0.581, p < 0.01). Performance in both groups still needed improvement. When conducting assessments, we should focus not only on overall performance but also on each individual compression. This study provides a new perspective for quantifying compression parameters; future efforts should incorporate new parameters and analyze the relationships among them.
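
For readers reproducing this kind of analysis, the sketch below shows how an arm angle and a compression depth can be derived from 3D marker coordinates. The marker positions are hypothetical values, not data from the study.

import numpy as np

def joint_angle(a, b, c):
    # Angle at marker b (degrees) between segments b->a and b->c.
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Assumed single-frame marker positions in metres.
shoulder = np.array([0.00, 0.00, 1.40])
elbow = np.array([0.00, 0.05, 1.10])
wrist = np.array([0.00, 0.08, 0.80])
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")

# Compression depth: vertical excursion of a sternum marker over one cycle.
sternum_z = np.array([1.00, 0.97, 0.95, 0.96, 1.00])  # assumed z-trace (m)
print(f"compression depth: {(sternum_z.max() - sternum_z.min()) * 100:.1f} cm")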


Subjects
Cardiopulmonary Resuscitation, Data Compression, Humans, Feasibility Studies, Motion Capture, China
3.
PLoS One ; 19(4): e0301622, 2024.
Article in English | MEDLINE | ID: mdl-38630695

ABSTRACT

This paper proposes a reinforced concrete (RC) boundary beam-wall system that requires less construction material and a smaller floor height than the conventional RC transfer girder system. The structural performance of this system under axial compression was evaluated through a structural test on four 1/2-scale specimens. In addition, three-dimensional nonlinear finite element analysis was performed to verify the effectiveness of the boundary beam-wall system. Three test parameters were considered: the lower-to-upper wall length ratio, the lower wall thickness, and the stirrup details of the lower wall. The load-displacement curve was plotted for each specimen and its failure mode was identified. The test results showed that a decrease in the lower-to-upper wall length ratio significantly reduced the peak strength of the boundary beam-wall system, and that a difference between upper and lower wall thicknesses caused lateral bending due to eccentricity in the out-of-plane direction. Additionally, incorporating cross-ties and reducing stirrup spacing in the lower wall significantly improved initial stiffness and peak strength, effectively mitigating stress concentration.


Subjects
Construction Materials, Data Compression, Finite Element Analysis, Physical Phenomena
4.
J Acoust Soc Am ; 155(4): 2589-2602, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38607268

ABSTRACT

The processing and perception of amplitude modulation (AM) in the auditory system reflect a frequency-selective process, often described as a modulation filterbank. Previous studies on perceptual AM masking reported similar results for older listeners with hearing impairment (HI listeners) and young listeners with normal hearing (NH listeners), suggesting no effects of age or hearing loss on AM frequency selectivity. However, recent evidence has shown that age, independently of hearing loss, adversely affects AM frequency selectivity. Hence, this study aimed to disentangle the effects of hearing loss and age. A simultaneous AM masking paradigm was employed, using a sinusoidal carrier at 2.8 kHz, narrowband noise modulation maskers, and target modulation frequencies of 4, 16, 64, and 128 Hz. The results obtained from young (n = 3, 24-30 years of age) and older (n = 10, 63-77 years of age) HI listeners were compared to previously obtained data from young and older NH listeners. Notably, the HI listeners generally exhibited lower (unmasked) AM detection thresholds and greater AM frequency selectivity than their NH counterparts in both age groups. Overall, the results suggest that age negatively affects AM frequency selectivity for both NH and HI listeners, whereas hearing loss improves AM detection and AM selectivity, likely due to the loss of peripheral compression.
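
The stimuli described follow the standard definition of sinusoidal amplitude modulation, so they are easy to synthesize. The sketch below generates a modulated 2.8 kHz carrier; the modulation depth and duration are assumed values.

import numpy as np

fs = 44100                          # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)       # 500-ms stimulus (assumed duration)
fc, fm, m = 2800, 16, 0.5           # carrier, modulation frequency, depth

# Sinusoidal AM: s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t)
stimulus = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# AM detection thresholds are conventionally reported as 20*log10(m).
print(f"depth m = {m} corresponds to {20 * np.log10(m):.1f} dB")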


Subjects
Data Compression, Deafness, Hearing Loss, Humans, Perceptual Masking
5.
PLoS One ; 19(4): e0288296, 2024.
Article in English | MEDLINE | ID: mdl-38557995

ABSTRACT

Network traffic prediction is an important network monitoring method, widely used in network resource optimization and anomaly detection. However, with the increasing scale of networks and the rapid development of fifth-generation (5G) mobile networks, traditional traffic forecasting methods are no longer applicable. To solve this problem, this paper applies a Long Short-Term Memory (LSTM) network, data augmentation, a clustering algorithm, model compression, and other technologies, and proposes a Cluster-based Lightweight PREdiction Model (CLPREM) for real-time traffic prediction in 5G mobile networks. We designed unique data processing and classification methods that make CLPREM more robust than traditional neural network models. To demonstrate the effectiveness of the method, we designed and conducted experiments in a variety of settings. Experimental results confirm that CLPREM achieves higher accuracy than traditional prediction schemes at a lower time cost. To address the occasional anomalous predictions of CLPREM, we propose a preprocessing method with minimal time overhead. This approach not only enhances the accuracy of CLPREM but also effectively addresses the real-time traffic prediction challenge in 5G mobile networks.
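
A minimal sketch of the LSTM forecasting component is shown below (the clustering, augmentation, and compression stages are omitted): a PyTorch model mapping a window of past traffic values to the next value. The layer sizes are assumptions, not the published CLPREM configuration.

import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    # Window of past traffic -> next value; sizes are illustrative.
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict from the last hidden state

model = TrafficLSTM()
window = torch.randn(8, 12, 1)         # 8 mock series, 12 past steps each
print(model(window).shape)             # torch.Size([8, 1])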


Subjects
Data Compression, Neural Networks, Computer, Algorithms, Forecasting
6.
IEEE Trans Image Process ; 33: 2502-2513, 2024.
Article in English | MEDLINE | ID: mdl-38526904

ABSTRACT

Residual coding has gained prevalence in lossless compression: a lossy layer is first employed, and the reconstruction errors (i.e., residues) are then losslessly compressed. The underlying principle of residual coding is the exploitation of priors through context modeling. Here, we propose a residual coding framework for 3D medical images, with an off-the-shelf video codec as the lossy layer and a Bilateral Context Modeling based Network (BCM-Net) as the residual layer. The BCM-Net achieves efficient lossless compression of residues by exploring intra-slice and inter-slice bilateral contexts. In particular, a symmetry-based intra-slice context extraction (SICE) module is proposed to mine bilateral intra-slice correlations rooted in the inherent anatomical symmetry of 3D medical images. Moreover, a bi-directional inter-slice context extraction (BICE) module is designed to explore bilateral inter-slice correlations from bi-directional references, thereby yielding representative inter-slice context. Experiments on popular 3D medical image datasets demonstrate that the proposed method outperforms existing state-of-the-art methods owing to efficient redundancy reduction. Our code will be available on GitHub for future research.
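
The residual-coding principle itself is easy to demonstrate. In the sketch below, coarse quantization stands in for the video codec's lossy layer and LZMA stands in for the learned residual layer, so the round trip is exactly lossless; the slice data is synthetic.

import lzma
import numpy as np

# Mock 16-bit slice with smooth, anatomy-like intensity structure.
slice_img = (np.add.outer(np.arange(64), np.arange(64)) * 8).astype(np.int16)

# Lossy layer: coarse quantization stands in for the video codec.
step = 32
lossy = (slice_img // step) * step

# Residual layer: residues are small and compress well losslessly.
residue = (slice_img - lossy).astype(np.int16)
packed = lzma.compress(residue.tobytes())

# Lossless round trip: lossy reconstruction + decoded residues.
decoded = np.frombuffer(lzma.decompress(packed), dtype=np.int16).reshape(64, 64)
assert np.array_equal(lossy + decoded, slice_img)
print(f"residues: {residue.nbytes} raw bytes -> {len(packed)} compressed")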


Subjects
Data Compression, Data Compression/methods, Imaging, Three-Dimensional/methods
7.
Neural Netw ; 174: 106250, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38531122

ABSTRACT

Snapshot compressive hyperspectral imaging requires reconstructing a complete hyperspectral image from its compressive snapshot measurement, a challenging inverse problem. This paper proposes an enhanced deep unrolling neural network, called EDUNet, to tackle this problem. EDUNet is constructed via the deep unrolling of a proximal gradient descent algorithm and introduces two innovative modules for the gradient-driven update and the proximal mapping, respectively. The gradient-driven update module leverages a memory-assisted descent approach inspired by momentum-based acceleration techniques, enhancing the unrolled reconstruction process and improving convergence. The proximal mapping is modeled by a sub-network with cross-stage spectral self-attention, which effectively exploits the inherent self-similarities of hyperspectral images along the spectral axis. It also enhances feature flow throughout the network, contributing to the reconstruction performance gain. Furthermore, we introduce a spectral geometry consistency loss, encouraging EDUNet to prioritize the geometric layout of spectral curves, leading to a more precise capture of spectral information. Experiments are conducted on three benchmark datasets, KAIST, ICVL, and Harvard, along with some real data, comprising a total of 73 samples. The experimental results demonstrate that EDUNet outperforms 15 competing models across four metrics: PSNR, SSIM, SAM, and ERGAS.
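
A minimal sketch of one unrolled proximal-gradient stage for a linear measurement model y = Phi x is shown below. The learned proximal operator here is a tiny placeholder network, not the paper's cross-stage spectral self-attention module, and all sizes are assumptions.

import torch
import torch.nn as nn

class UnrolledPGDStage(nn.Module):
    # One stage of unrolled proximal gradient descent for y = Phi @ x.
    def __init__(self, n):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))   # learnable step size
        self.prox = nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n))

    def forward(self, x, y, Phi):
        grad = Phi.T @ (Phi @ x - y)      # gradient of 0.5*||Phi x - y||^2
        return self.prox(x - self.step * grad)

n, m = 64, 16
Phi = torch.randn(m, n) / m ** 0.5        # mock sensing matrix
x, y = torch.zeros(n), torch.randn(m)     # init estimate, mock measurement
stages = nn.ModuleList([UnrolledPGDStage(n) for _ in range(5)])
for stage in stages:                       # K unrolled iterations
    x = stage(x, y, Phi)
print(x.shape)                             # torch.Size([64])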


Subjects
Data Compression, Hyperspectral Imaging, Physical Phenomena, Algorithms, Motion (Physics)
8.
Neural Netw ; 174: 106220, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38447427

ABSTRACT

Structured pruning is a representative model compression technology for convolutional neural networks (CNNs) that prunes less important filters or channels. Most recent structured pruning methods establish criteria to measure filter importance, mainly based on the magnitude of weights or other parameters in the CNN. However, these criteria lack explainability, and relying solely on the numerical values of network parameters is insufficient to assess the relationship between a channel and model performance. Moreover, directly applying such pruning criteria globally may lead to suboptimal solutions, so search algorithms are needed to determine the pruning ratio for each layer. To address these issues, we propose ARPruning (Attention-map-based Ranking Pruning), which constructs a new pruning criterion for the importance of intra-layer channels and develops a new local neighborhood search algorithm for determining the optimal inter-layer pruning ratio. To measure the relationship between a candidate channel and model performance, we build an intra-layer channel importance criterion that considers the attention map of each layer. We then propose an automatic strategy that searches for the optimal pruning solution effectively and efficiently. By integrating the well-designed pruning criterion and search strategy, ARPruning maintains a high compression rate while achieving outstanding accuracy. Our experiments also show that ARPruning achieves better compression results than state-of-the-art pruning methods. The code can be obtained at https://github.com/dozingLee/ARPruning.
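
As a toy version of the ranking step, the sketch below scores channels by the energy of their spatial activation maps and keeps the top fraction. This score and the fixed pruning ratio are stand-ins for the paper's attention-map criterion and searched per-layer ratios.

import torch

def channel_importance(feature_map):
    # feature_map: (batch, channels, H, W). Score each channel by the
    # mean magnitude of its spatial map, averaged over the batch.
    return feature_map.abs().mean(dim=(2, 3)).mean(dim=0)

feats = torch.randn(16, 64, 8, 8)               # mock activations of one layer
scores = channel_importance(feats)
ratio = 0.5                                     # assumed per-layer pruning ratio
keep = torch.topk(scores, k=int(64 * (1 - ratio))).indices
print(f"keeping {keep.numel()} of 64 channels")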


Subjects
Algorithms, Data Compression, Neural Networks, Computer
9.
Bioinformatics ; 40(4)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38530800

ABSTRACT

MOTIVATION: The full automation of digital neuronal reconstruction from light microscopic images has long been impeded by noisy neuronal images. Previous attempts to improve image quality have struggled to achieve a good compromise between robustness and computational efficiency. RESULTS: We present an image enhancement pipeline named Neuronal Image Enhancement through Noise Disentanglement (NIEND). In extensive benchmarking on 863 mouse neuronal images with manually annotated gold standards, NIEND achieves remarkable improvements in image quality, including signal-background contrast (40-fold) and background uniformity (10-fold), compared to raw images. Furthermore, automatic reconstructions on NIEND-enhanced images show significant improvements over both raw images and images enhanced by other methods. Specifically, the average F1 score of NIEND-enhanced reconstructions is 0.88, surpassing the original 0.78 and the second-ranking method at 0.84. Up to 52% of reconstructions from NIEND-enhanced images outperform all four other methods in F1 score. In addition, NIEND requires only 1.6 s on average to process a 256 × 256 × 256 image, and NIEND-processed images attain a substantial average LZMA compression rate of 1%. NIEND improves image quality and neuron reconstruction, with the potential to enable significant advances in automated neuron morphology reconstruction at petascale. AVAILABILITY AND IMPLEMENTATION: The study is based on Vaa3D and Python 3.10. Vaa3D is available on GitHub (https://github.com/Vaa3D). The proposed NIEND method is implemented in Python and hosted on GitHub along with the testing code and data (https://github.com/zzhmark/NIEND). The raw neuronal images of mouse brains can be found at the BICCN's Brain Image Library (BIL) (https://www.brainimagelibrary.org). The detailed list and associated meta information are summarized in Supplementary Table S3.
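
The two reported summary statistics are straightforward to compute. The sketch below measures the LZMA compression rate and a signal-background contrast on a mock neuronal volume; the threshold and synthetic data are assumptions.

import lzma
import numpy as np

rng = np.random.default_rng(0)
vol = rng.integers(0, 5, (64, 64, 64)).astype(np.uint8)   # dim background
vol[32, 32, :] = 200                                       # bright mock neurite

# Compression rate: compressed size over raw size (the reported ~1% figure).
rate = len(lzma.compress(vol.tobytes())) / vol.nbytes

# Signal-background contrast: mean foreground over mean background.
mask = vol > 50                                            # assumed threshold
contrast = vol[mask].astype(float).mean() / vol[~mask].astype(float).mean()
print(f"LZMA compression rate: {rate:.1%}")
print(f"signal-background contrast: {contrast:.0f}x")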


Subjects
Data Compression, Neurons, Animals, Mice, Tomography, X-Ray Computed/methods, Image Enhancement, Brain, Image Processing, Computer-Assisted/methods
10.
Sci Rep ; 14(1): 6209, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38485967

ABSTRACT

Efficient and rapid auxiliary diagnosis of different grades of lung adenocarcinoma helps doctors accelerate individualized diagnosis and treatment, thus improving patient prognosis. Pathological images of lung adenocarcinoma tissue often exhibit large intra-class differences and small inter-class differences across grades. If attention mechanisms such as Coordinate Attention (CA) are applied directly to lung adenocarcinoma grading, they tend to compress feature information excessively and overlook information dependencies within the same dimension. We therefore propose a Dimension Information Embedding Attention Network (DIEANet) for lung adenocarcinoma grading. Specifically, we combine different pooling methods to automatically select local regions of key growth patterns, such as lung adenocarcinoma cells, enhancing the model's focus on local information. Additionally, we employ an interactive fusion approach to concentrate feature information within the same dimension and across dimensions, improving model performance. Extensive experiments show that, at equal computational expense, DIEANet with a ResNet34 backbone reaches an accuracy of 88.19%, an AUC of 96.61%, an MCC of 81.71%, and a Kappa of 81.16%, achieving state-of-the-art objective metrics compared with seven other attention mechanisms. It also aligns more closely with the visual attention of pathology experts under subjective visual assessment.
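
In the spirit of the pooling-fusion idea (though not the published DIEANet design), the sketch below combines average and max pooling into a per-channel gate that re-weights local features; all sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedPoolGate(nn.Module):
    # Toy gate fusing average- and max-pooled statistics into channel weights.
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        avg = F.adaptive_avg_pool2d(x, 1).expand_as(x)
        mx = F.adaptive_max_pool2d(x, 1).expand_as(x)
        gate = torch.sigmoid(self.fc(torch.cat([avg, mx], dim=1)))
        return x * gate                         # re-weighted features

x = torch.randn(2, 32, 56, 56)                  # mock histology features
print(MixedPoolGate(32)(x).shape)               # torch.Size([2, 32, 56, 56])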


Subjects
Adenocarcinoma of Lung, Adenocarcinoma, Data Compression, Lung Neoplasms, Humans, Benchmarking, Lung Neoplasms/diagnosis
11.
Nat Commun ; 15(1): 2376, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38491032

ABSTRACT

Despite growing interest in archiving information in synthetic DNA to confront the data explosion, quantitatively querying data stored in DNA remains a challenge. Here, we present Search Enabled by Enzymatic Keyword Recognition (SEEKER), which utilizes CRISPR-Cas12a to rapidly generate visible fluorescence when a DNA target corresponding to a keyword of interest is present. SEEKER achieves quantitative text searching because the growth rate of fluorescence intensity is proportional to keyword frequency. Compatible with SEEKER, we develop non-collision grouping coding, which reduces the size of the dictionary and enables lossless compression without disrupting the original order of the texts. Using four queries, we correctly identify keywords in 40 files against a background of ~8000 irrelevant terms. Parallel searching with SEEKER can be performed on a 3D-printed microfluidic chip. Overall, SEEKER provides a quantitative approach to parallel searching over the complete content stored in DNA, with simple implementation and rapid results.
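
Since the readout premise is that fluorescence growth rate scales with keyword frequency, quantification reduces to a slope estimate. The sketch below fits that slope from a synthetic kinetic trace and converts it with an assumed calibration factor.

import numpy as np

# Hypothetical kinetic readout: fluorescence (a.u.) sampled each minute.
t = np.arange(0, 10)
fluor = 3.0 + 4.2 * t + np.random.default_rng(1).normal(0, 0.3, t.size)

# Slope of the linear fit is proportional to keyword frequency; the
# calibration factor below is an assumption, not a measured constant.
slope = np.polyfit(t, fluor, 1)[0]
CAL = 2.0                                 # assumed a.u./min per keyword copy
print(f"slope {slope:.2f} a.u./min -> ~{slope / CAL:.1f} keyword copies")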


Subjects
Data Compression, Search Engine
12.
PLoS One ; 19(3): e0297154, 2024.
Article in English | MEDLINE | ID: mdl-38446783

ABSTRACT

This study introduces a novel concrete-filled tube (CFT) column system featuring a steel tube composed of four internal triangular units (ITUs). The ITUs reduce the width-thickness ratio of the steel tube and enlarge the effective confinement area of the infilled concrete. This design enhancement is anticipated to improve structural strength and ductility, contributing to better overall performance and sustainability. To assess the effectiveness of the proposed column system, a full-scale test was conducted on five square steel tube column specimens subjected to axial compression. Two specimens followed the conventional steel tube column design, while the remaining three featured the new CFT columns with ITUs. The shape of the CFT column, the presence of infilled concrete, and the presence of openings in the ITUs were the test parameters. The test results reveal that the ductility of the proposed CFT column system improved by at least 30% compared with the conventional CFT column, while its initial stiffness and axial compressive strength remained comparable to those of the conventional CFT column.


Subjects
Data Compression, Compressive Strength, Physical Phenomena, Steel, Tensile Strength
13.
Sci Rep ; 14(1): 5168, 2024 03 02.
Article in English | MEDLINE | ID: mdl-38431641

ABSTRACT

Magnetic resonance imaging is a medical imaging technique that creates comprehensive images of the tissues and organs of the body. This study presents an advanced approach for storing and compressing Neuroimaging Informatics Technology Initiative (NIfTI) files, a standard format in magnetic resonance imaging, designed to enhance telemedicine services by facilitating efficient, high-quality communication between healthcare practitioners and patients. The proposed downsampling approach begins by opening the NIfTI file as volumetric data and partitioning it into slice images. A quantization hiding technique is then applied to each pair of consecutive slice images to generate a stego slice of the same size, involving three major steps: normalization, microblock generation, and discrete cosine transformation. Finally, the resulting stego slice images are assembled into the final NIfTI file as volumetric data. The upsampling process, designed to be completely blind, reverses the downsampling steps to reconstruct the subsequent image slice accurately. The efficacy of the proposed method was evaluated on a magnetic resonance imaging dataset, using peak signal-to-noise ratio, signal-to-noise ratio, structural similarity index, and entropy as key performance metrics. The results demonstrate that the proposed approach not only significantly reduces file sizes but also maintains high image quality.
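
The transform step of the described pipeline can be illustrated on a single microblock. The sketch below applies a 2-D DCT to an 8 × 8 block from a normalized slice, quantizes the coefficients, inverts, and reports PSNR; the block size and quantization step are assumptions, and the hiding step is omitted.

import numpy as np
from scipy.fft import dctn, idctn

# One 8x8 "microblock" from a normalized MRI slice (values assumed in [0, 1]).
rng = np.random.default_rng(0)
block = rng.random((8, 8))

# Forward 2-D DCT, coefficient quantization, inverse transform.
coeffs = dctn(block, norm="ortho")
q = 0.05                                   # assumed quantization step
recon = idctn(np.round(coeffs / q) * q, norm="ortho")

mse = np.mean((block - recon) ** 2)
psnr = 10 * np.log10(1.0 / mse)            # peak value 1.0 after normalization
print(f"PSNR of the microblock round trip: {psnr:.1f} dB")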


Subjects
Data Compression, Telemedicine, Humans, Data Compression/methods, Magnetic Resonance Imaging/methods, Neuroimaging, Signal-to-Noise Ratio
14.
Sci Rep ; 14(1): 5087, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38429300

ABSTRACT

When EEG signals are collected at the rate required by the Nyquist theorem, long recordings produce a large amount of data, and limited bandwidth, end-to-end delay, and memory space put great pressure on effective data transmission. The advent of compressed sensing alleviates this transmission pressure. However, iterative compressed sensing reconstruction algorithms for EEG signals involve complex calculations and slow data processing, limiting the application of compressed sensing in rapid EEG monitoring systems. This paper therefore presents a non-iterative, fast algorithm for reconstructing EEG signals using compressed sensing and deep learning. The algorithm uses an improved residual network model that extracts EEG feature information with one-dimensional dilated convolutions and directly learns the nonlinear mapping between the measurements and the original signal, allowing quick and accurate reconstruction. The proposed method was verified by simulation on the open BCI contest dataset. The results show that it achieves higher reconstruction accuracy and faster reconstruction than traditional CS reconstruction algorithms and existing deep learning reconstruction algorithms, enabling rapid reconstruction of EEG signals.
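
A minimal sketch of a non-iterative decoder in this style is shown below: a learned linear lift from M measurements to N samples, refined by a dilated-convolution residual block. The sizes and depth are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class CSDecoder(nn.Module):
    # Maps M compressed measurements directly to N signal samples.
    def __init__(self, m=64, n=256):
        super().__init__()
        self.lift = nn.Linear(m, n)
        self.res = nn.Sequential(
            nn.Conv1d(1, 16, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(16, 1, 3, padding=2, dilation=2),
        )

    def forward(self, y):                   # y: (batch, m)
        x0 = self.lift(y).unsqueeze(1)      # coarse estimate, (batch, 1, n)
        return (x0 + self.res(x0)).squeeze(1)  # residual refinement

y = torch.randn(4, 64)                      # mock compressed measurements
print(CSDecoder()(y).shape)                 # torch.Size([4, 256])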


Subjects
Data Compression, Deep Learning, Signal Processing, Computer-Assisted, Data Compression/methods, Algorithms, Electroencephalography/methods
15.
BMC Genomics ; 25(1): 266, 2024 Mar 09.
Article in English | MEDLINE | ID: mdl-38461245

ABSTRACT

BACKGROUND: DNA storage offers large capacity, long-term stability, and low power consumption relative to other storage media, making it a promising new medium for multimedia information such as images. However, DNA storage has a low coding density and weak error correction ability. RESULTS: To achieve more efficient DNA storage image reconstruction, we propose DNA-QLC (QRes-VAE and Levenshtein code (LC)), which uses the quantized ResNet VAE (QRes-VAE) model for image compression and LC for DNA sequence error correction, improving both coding density and error correction ability. Experimental results show that the DNA-QLC encoding method not only yields DNA sequences that satisfy the combinatorial constraints but also achieves a net information density 2.4 times higher than that of DNA Fountain. Furthermore, at a higher error rate (2%), DNA-QLC achieved image reconstruction with an SSIM value of 0.917. CONCLUSIONS: The results indicate that the DNA-QLC encoding scheme guarantees the efficiency and reliability of the DNA storage system and improves the application potential of DNA storage for multimedia information such as images.
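
Levenshtein codes protect against exactly the edit errors that affect DNA synthesis and sequencing. The sketch below computes the Levenshtein (edit) distance between two short sequences, the error model underlying the LC component; it is an illustration of the metric, not of the code construction.

def levenshtein(a: str, b: str) -> int:
    # Edit distance: substitutions, insertions, and deletions.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# A single sequencing indel shifts the rest of the read by one base.
print(levenshtein("ACGTACGT", "ACGACGT"))  # 1: one deleted base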


Subjects
Algorithms, Data Compression, Reproducibility of Results, DNA/genetics, Data Compression/methods, Image Processing, Computer-Assisted/methods
16.
Bioinformatics ; 40(3)2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38377404

ABSTRACT

MOTIVATION: Seeding is a rate-limiting stage in sequence alignment for next-generation sequencing reads. Existing optimization efforts typically use hardware and machine-learning techniques to accelerate seeding, but an efficient solution offered by professional next-generation sequencing compressors has so far been largely overlooked. Besides achieving remarkable compression ratios by reordering reads, these compressors provide valuable insights for downstream alignment, revealing repetitive computations that account for more than 50% of the seeding procedure in the commonly used short-read aligner BWA-MEM at typical sequencing coverage. Nevertheless, this redundancy information has not been fully exploited. RESULTS: In this study, we present a compressive seeding algorithm, named CompSeed, to fill this gap. CompSeed, in collaboration with existing reordering-based compression tools, finishes the BWA-MEM seeding process in about half the time by caching all intermediate seeding results in compact trie structures that directly answer the repetitive inquiries which would otherwise cause frequent random memory accesses. Furthermore, CompSeed performs even better as sequencing coverage increases, since it focuses solely on the small informative portion of the reads after compression. This strategy highlights the promise of integrating sequence compression and alignment to tackle the ever-growing volume of sequencing data. AVAILABILITY AND IMPLEMENTATION: CompSeed is available at https://github.com/i-xiaohu/CompSeed.
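
The caching idea can be illustrated with a toy trie: each seed substring's result is computed once, and repeated inquiries are answered from the trie. The layout and API below are illustrative, not CompSeed's actual data structure.

class SeedTrie:
    # Store the seeding result for each seed once; reuse on repeats.
    def __init__(self):
        self.root = {}

    def get_or_compute(self, seed, compute):
        node = self.root
        for base in seed:
            node = node.setdefault(base, {})
        if "$" not in node:                # cache miss: do the real work once
            node["$"] = compute(seed)
        return node["$"]

calls = []
def fake_seeding(seed):
    # Stand-in for the expensive BWA-MEM seeding computation.
    calls.append(seed)
    return f"hits({seed})"

cache = SeedTrie()
for read_seed in ["ACGT", "ACGT", "ACGA", "ACGT"]:   # redundant seeds
    cache.get_or_compute(read_seed, fake_seeding)
print(f"{len(calls)} computations for 4 queries")     # 2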


Subjects
Data Compression, Software, Sequence Analysis, DNA/methods, Algorithms, Data Compression/methods, Computers, High-Throughput Nucleotide Sequencing/methods
17.
PLoS One ; 19(2): e0280486, 2024.
Article in English | MEDLINE | ID: mdl-38394171

ABSTRACT

The mechanical properties of deep rock masses are significantly influenced by temperature and other factors, and the effect of temperature on the strength of deep rock masses poses a serious challenge to deep resource exploitation and engineering construction. In this paper, a thermal-mechanical coupling model is established with the particle flow code (PFC2D) to study the uniaxial compression response of rock masses containing microcracks after temperature loading. Failure strength, microcracking, and strain were analyzed. The results show that: (i) When the soft rock thickness ratio Hs/H < 0.5, the displacement caused by the applied temperature concentrates at the structural plane and the contact force concentrates at the ends of the initial microcrack; when Hs/H ≥ 0.5, the displacement concentrates on both sides of the initial microcrack and the contact force concentrates in the hard rock region. (ii) The number of microcracks decreases with increasing soft rock thickness under all conditions. When Hs/H < 0.5, the curve relating the number of microcracks to vertical strain shows two stages; when Hs/H ≥ 0.5, it shows three stages. (iii) When Hs/H < 0.5, the failure strength decreases with increasing soft rock thickness ratio at T = 100°C and 200°C, while at T = 300°C and 400°C it first decreases and then increases. When Hs/H ≥ 0.5, the failure strength increases with increasing soft rock thickness at T = 200°C, 300°C, and 400°C, whereas at T = 100°C it decreases with increasing soft rock thickness.


Subjects
Data Compression, Engineering, Molecular Weight, Temperature
18.
Neural Netw ; 172: 106013, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38354665

ABSTRACT

Many large and complex deep neural networks have been shown to provide higher performance on various computer vision tasks. However, very little is known about the relationship between the complexity of the input data, the type of noise, and the depth needed for correct classification. Existing studies do not adequately address common corruptions, especially regarding the impact these corruptions have on the individual parts of a deep neural network. We can therefore assume that classification (or misclassification) happens at particular layers of a network, accumulating to produce a final correct or incorrect prediction. In this paper, we introduce the novel concept of corruption depth, which identifies the network depth up to which a misclassification persists. We assert that identifying such layers will help in better network design, by pruning certain layers rather than purifying the entire network, which is computationally heavy. Through extensive experiments, we present a coherent study of how examples are processed through the network. Our approach also illustrates different philosophies of example memorization and a one-dimensional view of sample or query difficulty. We believe that understanding corruption depth can open a new dimension of model explainability and model compression, where instead of just visualizing the attention map, the classification progress can be traced throughout the network.
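
One simple way to localize where a corrupted input's prediction diverges is to compare clean and corrupted predictions through successive prefixes of the network, as in the sketch below. The divergence criterion, the untrained toy network, and the noise model are assumptions, not the paper's protocol.

import torch
import torch.nn as nn

torch.manual_seed(0)
layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(6)])
readout = nn.Linear(16, 10)                 # shared classifier head

def predict_through(x, depth):
    # Run the input through the first `depth` layers, then classify.
    for layer in layers[:depth]:
        x = torch.relu(layer(x))
    return readout(x).argmax(dim=-1)

clean = torch.randn(1, 16)
corrupt = clean + 0.5 * torch.randn(1, 16)  # assumed corruption noise

depth = next((d for d in range(1, 7)
              if predict_through(corrupt, d).item()
              != predict_through(clean, d).item()), None)
print(f"first depth where predictions diverge: {depth}")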


Subjects
Data Compression, Neural Networks, Computer, Attention
19.
J Xray Sci Technol ; 32(2): 475-491, 2024.
Article in English | MEDLINE | ID: mdl-38393881

ABSTRACT

BACKGROUND: Digital X-ray imaging is essential for diagnosing osteoporosis, but distinguishing affected patients from healthy individuals using these images remains challenging. OBJECTIVE: This study introduces a novel method using deep learning to improve osteoporosis diagnosis from bone X-ray images. METHODS: A dataset of bone X-ray images was analyzed using a newly proposed procedure. This procedure involves segregating the images into regions of interest (ROI) and non-ROI, thereby reducing data redundancy. The images were then processed to enhance both spatial and statistical features. For classification, a Support Vector Machine (SVM) classifier was employed to distinguish between osteoporotic and non-osteoporotic cases. RESULTS: The proposed method demonstrated a promising Area under the Curve (AUC) of 90.8% in diagnosing osteoporosis, benchmarking favorably against existing techniques. This signifies a high level of accuracy in distinguishing osteoporosis patients from healthy controls. CONCLUSIONS: The proposed method effectively distinguishes between osteoporotic and non-osteoporotic cases using bone X-ray images. By enhancing image features and employing SVM classification, the technique offers a promising tool for efficient and accurate osteoporosis diagnosis.
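
The classification stage is a standard SVM pipeline; the sketch below reproduces its shape with scikit-learn on synthetic ROI feature vectors. The features, labels, and kernel choice are assumptions, not the study's data or configuration.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Mock ROI feature vectors (e.g., texture statistics per radiograph).
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 12))
y = rng.integers(0, 2, 200)                 # 1 = osteoporotic (synthetic)
X[y == 1] += 0.8                            # make classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC on synthetic features: {auc:.2f}")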


Subjects
Data Compression, Osteoporosis, Humans, X-Rays, Radiography, Osteoporosis/diagnostic imaging, Bone and Bones
20.
Sci Rep ; 14(1): 3207, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38332238

ABSTRACT

Many previous studies have investigated visual distance perception, especially for small to moderate distances. Few experiments, however, have evaluated the perception of large distances (e.g., 100 m or more), and those that have been conducted reached conflicting, even diametrically opposite, conclusions. In the current experiment, functions relating actual and perceived distance were obtained for sixteen adult observers using the method of equal-appearing intervals. These functions were obtained for outdoor viewing in a typical university environment: the experiment was conducted along a sidewalk adjacent to a typical street where campus buildings, trees, street signs, and the like were visible. The overall results indicated perceptual compression of distances in depth, such that the stimulus distance intervals appeared significantly shorter than the actual (physical) distance intervals. There were, however, sizeable individual differences: the judgments of half of the observers were relatively accurate, whereas the judgments of the remaining half were inaccurate to varying degrees. The results demonstrate that no single function describes how human observers visually perceive large distance intervals in outdoor environments.


Subjects
Data Compression, Visual Perception, Adult, Humans, Distance Perception, Judgment, Individuality, Depth Perception